
    On the Development of a Generic Multi-Sensor Fusion Framework for Robust Odometry Estimation

    In this work we review the design choices and the mathematical and software engineering techniques employed in the development of the ROAMFREE sensor fusion library, a general, open-source framework for pose tracking and sensor parameter self-calibration in mobile robotics. In ROAMFREE, a comprehensive logical sensor library abstracts away the actual sensor hardware and processing while preserving model accuracy, thanks to a rich set of calibration parameters such as biases, gains, distortion matrices and geometric placement dimensions. The modular formulation of the sensor fusion problem, based on state-of-the-art factor graph inference techniques, handles an arbitrary number of multi-rate sensors and adapts to virtually any kind of mobile robot platform, such as Ackermann steering vehicles, quadrotor unmanned aerial vehicles and omni-directional mobile robots. Different solvers are available to target high-rate online pose tracking as well as accurate offline trajectory smoothing and parameter calibration. The modularity, versatility and out-of-the-box functioning of the resulting framework come at the cost of increased software architecture complexity with respect to an ad-hoc implementation of a platform-dependent sensor fusion algorithm, and required careful design of abstraction layers and decoupling interfaces between solvers, state variable representations and sensor error models. However, we review how a high-level, clean C++/Python API, as well as ROS interface nodes, hides the complexity of sensor fusion tasks from the end user, making ROAMFREE an ideal choice for new and existing mobile robot projects.
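    The factor-graph formulation of sensor fusion described above can be illustrated with a toy problem (a hypothetical 1D example, not the ROAMFREE API): poses linked by relative odometry factors and anchored by sparse absolute fixes, fused as one weighted least-squares problem.

```python
# Minimal 1D factor-graph fusion sketch (illustrative weights and data).
import numpy as np

odom = [1.0, 1.0, 1.0]      # relative factors: x[i+1] - x[i] measurements
gps = {0: 0.0, 3: 3.3}      # absolute factors on selected poses
w_odom, w_gps = 1.0, 0.5    # information weights (1 / sigma^2)

# Stack weighted linear residuals r = A x - b and solve in one shot.
n = 4
rows, rhs = [], []
for i, z in enumerate(odom):
    a = np.zeros(n)
    a[i], a[i + 1] = -1.0, 1.0
    rows.append(np.sqrt(w_odom) * a)
    rhs.append(np.sqrt(w_odom) * z)
for i, z in gps.items():
    a = np.zeros(n)
    a[i] = 1.0
    rows.append(np.sqrt(w_gps) * a)
    rhs.append(np.sqrt(w_gps) * z)

A, b = np.vstack(rows), np.array(rhs)
x, *_ = np.linalg.lstsq(A, b, rcond=None)   # fused pose estimates x[0..3]
```

    The framework generalizes this picture to SE(3) poses, multi-rate sensors and additional calibration-parameter vertices, but the estimation principle is the same.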


    Camera Calibration Models and Methods for Corridor Mapping with UAVs

    Camera calibration refers to the modeling of the relationship between the coordinates of object points and their projections on the image plane. This is usually done with parametric models that describe the physical properties of the lens system and camera assembly, such as the camera principal distance, the principal point, and various types of optical distortion. In photogrammetry, accurate knowledge of the parameters of such models, often referred to as Interior Orientation (IO), is of the utmost importance. In this work, we target advanced corridor mapping applications with UAVs. In this scenario, the camera calibration is not completely observable due to the unfavorable geometry of the flight trajectory (e.g., no cross-flight lines and a single altitude) and needs to be determined beforehand. Further challenges are introduced by the limited mechanical stability of UAV-grade cameras, which may cause slight variations in the IO that need to be recovered while processing production flights. We review and compare two well-known camera models, the Brown-Conrady and the Ebner self-calibration functions, in 36 calibration setups and provide a discussion of the results, where sub-ground-sampling-distance accuracy in the checkpoints was achieved for some, but not all, configurations.
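    For reference, the first of the two models can be sketched as follows: the Brown-Conrady mapping from ideal to distorted normalized image coordinates, with radial (k1, k2) and tangential (p1, p2) terms (the coefficient values used below are purely illustrative, not calibrated values from the paper).

```python
def brown_conrady(x, y, k1, k2, p1, p2):
    """Map ideal normalized image coordinates (x, y) to distorted ones."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2          # radial distortion factor
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd
```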

    Bundle adjustment with raw inertial observations

    It is well known that accurate aerial position and attitude control is beneficial for image orientation in airborne photogrammetry. The aerial control is traditionally obtained by Kalman filtering/smoothing of inertial and GNSS observations prior to the bundle adjustment. However, in Micro Aerial Vehicles this process may result in poor attitude determination due to the limited quality of the inertial sensors, large alignment uncertainty and residual correlations between sensor biases and initial attitude. We propose to include the raw inertial observations directly in the bundle adjustment, instead of as weighted position and attitude observations from a separate inertial/GNSS fusion step. The necessary observation models are derived in detail within the context of the so-called “Dynamic Networks”. We examine different real-world cases and show that the proposed approach is superior to the established processing pipeline in challenging scenarios, such as mapping in corridors and in areas where the reception of GNSS signals is denied.
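    The core idea of constraining the adjustment with raw inertial data can be sketched in a simplified, gravity-free 1D setting (hypothetical helper names; the paper derives the full 3D observation models): raw accelerations are integrated into velocity and position increments that directly link two image epochs.

```python
def imu_delta(acc_samples, dt):
    """Integrate raw 1D accelerations into velocity/position increments."""
    dv = dp = 0.0
    for a in acc_samples:
        dp += dv * dt + 0.5 * a * dt * dt   # position increment this step
        dv += a * dt                        # velocity increment this step
    return dv, dp

def inertial_residual(p_i, v_i, p_j, acc_samples, dt):
    """Residual linking two adjustment epochs through the raw IMU data."""
    dv, dp = imu_delta(acc_samples, dt)
    t = len(acc_samples) * dt
    return p_j - (p_i + v_i * t + dp)
```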

    Kinematic trajectory tracking controller for an all-terrain Ackermann steering vehicle

    In the last few years, the number of applications of autonomous mobile robots to outdoor and off-road tasks, such as border surveillance and monitoring, search and rescue, agriculture and driverless mobility, has been rapidly increasing. Among the many functionalities required to make a vehicle an autonomous robot, localisation, path planning and trajectory tracking are the most important. This paper proposes a novel approach to the trajectory tracking problem for an Ackermann steering vehicle. A multi-body dynamic model of the vehicle is proposed as a means to tune and validate the tracking controller. The proposal is supported by an experimental validation that shows the effectiveness of the trajectory tracking controller and its performance in comparison with the accuracy of the localisation algorithm.
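    The kinematic model underlying Ackermann-steering trajectory tracking can be sketched as a bicycle model driven by a simple proportional steering law (illustrative gains and goal; the paper's controller and multi-body validation model are considerably more sophisticated).

```python
import math

def bicycle_step(x, y, theta, v, delta, L, dt):
    """Advance the kinematic bicycle model one step (rear-axle reference)."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += v / L * math.tan(delta) * dt   # yaw rate from steering angle
    return x, y, theta

def steer_to_point(x, y, theta, gx, gy, k=1.5, max_delta=0.5):
    """Proportional steering toward the goal (gx, gy), clipped to limits."""
    err = math.atan2(gy - y, gx - x) - theta
    err = math.atan2(math.sin(err), math.cos(err))   # wrap to [-pi, pi]
    return max(-max_delta, min(max_delta, k * err))

# Drive at 1 m/s toward a fixed goal from the origin.
px = py = heading = 0.0
for _ in range(100):
    delta = steer_to_point(px, py, heading, 5.0, 3.0)
    px, py, heading = bicycle_step(px, py, heading, 1.0, delta, 1.0, 0.05)
```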

    Pose Tracking and Sensor Self-Calibration for an All-terrain Autonomous Vehicle

    In this work we address the simultaneous pose tracking and sensor self-calibration problem by applying a pose-graph optimization approach. A factor graph is employed to store robot pose estimates at different time instants, along with calibration parameters such as magnetometer hard and soft iron distortions and gyroscope biases. Specific factors are developed in this paper to handle Ackermann kinematic readings, inertial measurement units, magnetometers and global positioning systems. An experimental evaluation supports the viability of the approach on an autonomous all-terrain vehicle, for which we perform calibration and real-time pose tracking during navigation.
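    The magnetometer calibration model mentioned above can be sketched as follows (synthetic soft-iron matrix and hard-iron bias for illustration; in the paper these parameters are estimated within the factor graph rather than assumed known).

```python
import numpy as np

# Assumed (synthetic) calibration parameters for the sketch.
S = np.array([[1.10, 0.02, 0.00],
              [0.00, 0.95, 0.01],
              [0.00, 0.00, 1.05]])   # soft iron distortion matrix
b = np.array([0.30, -0.10, 0.05])    # hard iron bias

def distort(m_true):
    """Magnetometer measurement model: soft iron times field, plus bias."""
    return S @ m_true + b

def correct(m_meas):
    """Invert the calibration model to recover the undistorted field."""
    return np.linalg.solve(S, m_meas - b)
```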

    Photometric Long-Range Positioning of LED Targets for Cooperative Navigation in UAVs

    Autonomous flight with unmanned aerial vehicles (UAVs) currently depends on the availability and reliability of Global Navigation Satellite Systems (GNSS). In cluttered outdoor scenarios, such as narrow gorges, or near tall artificial structures, such as bridges or dams, reduced sky visibility and multipath effects compromise the quality and trustworthiness of GNSS position fixes, making autonomous, or even manual, flight difficult and dangerous. To overcome this problem, cooperative navigation has been proposed: a second UAV flies away from any occluding objects, in line of sight of the first, and provides the latter with positioning information, removing the need for full and reliable GNSS coverage in the area of interest. In this work we use high-power light-emitting diodes (LEDs) to mark the second drone, and we present a computer vision pipeline that tracks the second drone in real time from distances of up to 100 m and computes its relative position with decimeter accuracy. This is based on an extension of the classical iterative algorithm for the Perspective-n-Point problem in which the photometric error is minimized according to an image formation model. This extension substantially increases the accuracy of point-feature measurements in image space (up to 0.05 pixels), which directly translates into higher positioning accuracy with respect to conventional methods.
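    The sub-pixel feature measurement idea can be sketched on a toy example: render an LED spot with a simple Gaussian image formation model and recover its center to a fraction of a pixel. Here an intensity-weighted centroid stands in for the paper's full photometric refinement inside the iterative PnP loop.

```python
import math

def gaussian_spot(cx, cy, sigma, size):
    """Render a synthetic LED spot with a Gaussian image formation model."""
    return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))
             for x in range(size)] for y in range(size)]

def subpixel_center(img):
    """Intensity-weighted centroid: a sub-pixel estimate of the spot center."""
    total = sx = sy = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            total += v
            sx += v * x
            sy += v * y
    return sx / total, sy / total
```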

    An analysis of a gyro-free inertial system for INS/GNSS navigation

    Although the concept of determining angular velocity from the readings of multiple, displaced accelerometers is not new, it has apparently not yet matured into practical technologies. A new generation of low-cost MEMS accelerometers, expected to break through in the coming years, may enable gyro-free navigation in small platforms such as UAVs, ground vehicles and small planes. In this work we analyze a Gyro-Free Inertial Navigation System (GF-INS) employing four triads of accelerometers in a Distributed Redundant IMU (DRIMU) configuration. We investigate the statistical properties of the angular velocity estimator by means of simulations and speculate on the accelerometer noise levels that would allow a gyro-free navigation system to match the performance of a conventional IMU. Furthermore, we investigate the performance of using the gyro-free angular velocity estimator in combination with GNSS for navigation in a classical error-state EKF.
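    The underlying principle can be sketched in the planar case (the paper's DRIMU uses four full triads and a least-squares estimator): differencing two accelerometers mounted along the same lever arm cancels the common linear acceleration, leaving the centripetal term a_r = omega^2 * r and the tangential term a_t = omega_dot * r. The test below uses synthetic readings for omega = 2 rad/s, omega_dot = 0.5 rad/s^2 and a common acceleration of (0.1, -0.3) m/s^2.

```python
import math

def gyro_free_planar(a1_r, a1_t, a2_r, a2_t, r1, r2):
    """Recover |omega| and omega_dot from two displaced accelerometers
    (radial/tangential components at radii r1 and r2 on a rigid body)."""
    dr = r1 - r2
    omega_sq = (a1_r - a2_r) / dr    # centripetal difference: omega^2 * dr
    omega_dot = (a1_t - a2_t) / dr   # tangential difference: omega_dot * dr
    return math.sqrt(max(omega_sq, 0.0)), omega_dot
```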

    A general approach to time-varying parameters in pose-graph optimization

    Pose-graph optimization is becoming popular as a tool for solving position and attitude determination problems, especially in the context of Visual Simultaneous Localization and Mapping (V-SLAM). Recently, proprioceptive information sources such as inertial measurement units and kinematic/dynamic models have been appearing in this context. These models require other quantities to be estimated along with the camera poses and landmark 3D positions; examples are IMU bias processes, friction coefficients and other process modeling parameters. In this work we propose a general approach to the estimation of time-varying parameters in pose-graph optimization: we store parameter samples at an arbitrary rate in auxiliary vertices and employ interpolation schemes to recover their values at sensor reading timestamps. Prior knowledge or stochastic process models can be plugged in as additional edges incident on parameter nodes. Our approach is evaluated in the context of inertial navigation, where accelerometer and gyroscope bias processes need to be properly modeled and estimated.
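    The parameter-vertex idea can be sketched as follows: a bias stored as sparse samples is evaluated at any sensor-reading timestamp by interpolation (linear here; the approach admits other interpolation schemes).

```python
def interp_param(t, times, values):
    """Linearly interpolate a parameter stored in sparse vertices at time t."""
    if t <= times[0]:
        return values[0]
    if t >= times[-1]:
        return values[-1]
    for i in range(len(times) - 1):
        if times[i] <= t <= times[i + 1]:
            w = (t - times[i]) / (times[i + 1] - times[i])
            return (1.0 - w) * values[i] + w * values[i + 1]

def corrected_measurement(z, t, times, bias_samples):
    """Remove the interpolated bias value from a raw sensor reading."""
    return z - interp_param(t, times, bias_samples)
```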